Self-paced and self-consistent co-training for semi-supervised image segmentation

Authors

Abstract

Deep co-training has recently been proposed as an effective approach for image segmentation when annotated data is scarce. In this paper, we improve existing approaches for semi-supervised segmentation with a self-paced and self-consistent co-training method. To help distill information from unlabeled images, we first design a self-paced learning strategy for co-training that lets jointly-trained neural networks focus on easier-to-segment regions first, and then gradually consider harder ones. This is achieved via an end-to-end differentiable loss in the form of a generalized Jensen-Shannon divergence (JSD). Moreover, to encourage the predictions of the different networks to be both consistent and confident, we enhance this generalized JSD loss with an uncertainty regularizer based on entropy. The robustness of individual models is further improved using a self-ensembling loss that enforces their predictions to be consistent across training iterations. We demonstrate the potential of our method on three challenging image segmentation problems with different imaging modalities, using only a small fraction of labeled data. Results show clear advantages in terms of performance compared to standard co-training baselines and other state-of-the-art approaches for semi-supervised segmentation.
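As an illustration of the agreement loss described in the abstract, the sketch below computes a generalized JSD between the softmax maps of M jointly-trained networks and adds an entropy penalty so that predictions are pushed to be confident as well as mutually consistent. This is only a minimal sketch: the function name, the `entropy_weight` coefficient, and the omission of the paper's self-paced per-pixel weighting are assumptions for illustration, not the authors' exact formulation.

```python
import torch

def generalized_jsd_loss(probs, entropy_weight=0.1, eps=1e-8):
    """Hypothetical sketch of a generalized JSD agreement loss.

    probs: list of M tensors of shape (B, C, H, W), the softmax
           outputs of the M co-trained segmentation networks.

    Per pixel:  JSD = H(mean_i p_i) - (1/M) * sum_i H(p_i),
    plus an entropy regularizer on the mixture so that predictions
    are encouraged to be confident, not just mutually consistent.
    """
    mixture = torch.stack(probs, dim=0).mean(dim=0)            # (B, C, H, W)
    h_mixture = -(mixture * (mixture + eps).log()).sum(dim=1)  # (B, H, W)
    h_models = torch.stack(
        [-(p * (p + eps).log()).sum(dim=1) for p in probs], dim=0
    ).mean(dim=0)                                              # (B, H, W)
    jsd = h_mixture - h_models                                 # >= 0 per pixel
    return (jsd + entropy_weight * h_mixture).mean()

# Toy usage with two networks on a 4-class segmentation task:
p1 = torch.softmax(torch.randn(2, 4, 64, 64), dim=1)
p2 = torch.softmax(torch.randn(2, 4, 64, 64), dim=1)
loss = generalized_jsd_loss([p1, p2])
```

Per the abstract, the paper additionally modulates this agreement term with a self-paced schedule that emphasizes easier-to-segment regions early in training; that weighting is omitted from the sketch above.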

Similar Articles

Self-Paced Co-training

Notation and Definition: We assume that examples are drawn from some distribution D over an instance space X = X1 × X2, where X1 and X2 correspond to two different "views" of an example. Let c denote the target function, and let X+ and X− (for simplicity, we assume we are doing binary classification) denote the positive and negative regions of X, respectively. For i ∈ {1, 2}, let Xi+ = {xj ∈ Xi : ...
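Restated in standard two-view co-training notation, the setup above reads as follows; since the original passage is truncated, the explicit definition of the positive and negative regions via the target function c is an assumption, not a quote:

```latex
\[
x = (x_1, x_2) \in \mathcal{X} = \mathcal{X}_1 \times \mathcal{X}_2, \qquad
\mathcal{X}^{+} = \{\, x \in \mathcal{X} : c(x) = 1 \,\}, \quad
\mathcal{X}^{-} = \{\, x \in \mathcal{X} : c(x) = 0 \,\}.
\]
```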

Deep Co-Training for Semi-Supervised Image Recognition

In this paper, we study the problem of semi-supervised image recognition, which is to learn classifiers using both labeled and unlabeled images. We present Deep Co-Training, a deep-learning-based method inspired by the Co-Training framework [1]. The original Co-Training learns two classifiers on two views, i.e., data from different sources that describe the same instances. To extend this con...

Semi-supervised Learning using Kernel Self-consistent Labeling

We present a new method for semi-supervised learning based on any given valid kernel. Our strategy is to view the kernel as the covariance matrix of a Gaussian process and predict the label of each instance conditioned on all other instances. We then find a self-consistent labeling of the instances by using the hinge loss on the predictions on labeled data and the ε-insensitive loss on predicti...
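The conditional-prediction step described here, where each instance's label is predicted from all the others under a Gaussian process whose covariance is the kernel matrix, corresponds to the standard GP leave-one-out identity. The sketch below shows only that step; the function name and the `noise` jitter term are illustrative assumptions, and the hinge/ε-insensitive losses used to find the actual self-consistent labeling are omitted.

```python
import numpy as np

def loo_gp_means(K, y, noise=1e-3):
    """Leave-one-out GP predictive means: each instance's label
    predicted conditioned on all other instances.

    K: (n, n) kernel matrix, viewed as a GP covariance.
    y: (n,) current label vector (e.g. +/-1, with guesses filled
       in for the unlabeled instances).

    Uses the standard identity
        mu_i = y_i - [K^{-1} y]_i / [K^{-1}]_{ii}.
    """
    K_inv = np.linalg.inv(K + noise * np.eye(len(y)))  # jitter for stability
    return y - (K_inv @ y) / np.diag(K_inv)
```

A self-consistent labeling could then be sought by re-estimating the entries of y on the unlabeled instances from these predictions until they stop changing, while keeping the labeled entries fixed; the method summarized above instead casts this as a loss-minimization problem.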

A Self-training Method for Semi-supervised GANs

Since the creation of Generative Adversarial Networks (GANs), much work has been done to improve their training stability, their generated image quality, and their range of application, but nearly none has explored their self-training potential. Self-training was used before the advent of deep learning to allow training on limited labelled data and has shown impressive res...

Semi-Supervised Self-training Approaches for Imbalanced Splice Site Datasets

Machine Learning algorithms produce accurate classifiers when trained on large, balanced datasets. However, it is generally expensive to acquire labeled data, while unlabeled data is available in much larger amounts. A cost-effective alternative is to use Semi-Supervised Learning, which uses unlabeled data to improve supervised classifiers. Furthermore, for many practical problems, data often e...

Journal

Journal title: Medical Image Analysis

Year: 2021

ISSN: 1361-8423, 1361-8431, 1361-8415

DOI: https://doi.org/10.1016/j.media.2021.102146